We consider the nonstochastic multi-agent multi-armed bandit problem with agents collaborating via a communication network with delays. We show a lower bound for individual regret of all agents. We show that with suitable regularizers and communication protocols, a collaborative multi-agent \emph{follow-the-regularized-leader} (FTRL) algorithm has an individual regret upper bound that matches the lower bound up to a constant factor when the number of arms is large enough relative to degrees of agents in the communication graph. We also show that an FTRL algorithm with a suitable regularizer is regret optimal with respect to the scaling with the edge-delay parameter. We present numerical experiments validating our theoretical results and demonstrate cases when our algorithms outperform previously proposed algorithms.
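As a point of reference for the FTRL family discussed above, the following is a minimal single-agent sketch: FTRL with the negative-entropy regularizer and importance-weighted loss estimates, which reduces to exponential weights (EXP3-style). The multi-agent collaborative version in the abstract, with communication delays and other regularizers, is substantially more involved; this sketch, including the step size `eta`, is illustrative only.

```python
import math
import random

def exp3_ftrl(loss_fn, n_arms, horizon, eta=0.1):
    """Single-agent FTRL sketch with negative-entropy regularizer.

    The FTRL step argmin_p <p, L_hat> + (1/eta) * sum_i p_i log p_i
    has the closed-form exponential-weights solution used below.
    `loss_fn(t, arm)` returns the observed loss in [0, 1].
    """
    cum_loss = [0.0] * n_arms  # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(horizon):
        w = [math.exp(-eta * L) for L in cum_loss]
        z = sum(w)
        p = [wi / z for wi in w]
        arm = random.choices(range(n_arms), weights=p)[0]
        loss = loss_fn(t, arm)  # bandit feedback: only the pulled arm's loss
        total += loss
        cum_loss[arm] += loss / p[arm]  # unbiased importance-weighted estimate
    return total
```

With one arm that always incurs zero loss, the cumulative loss stays sublinear in the horizon, as the exponential weights concentrate on the best arm.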
We consider the infinitely many-armed bandit problem with rotting rewards, where the mean reward of an arm decreases with each pull of the arm according to an arbitrary trend, with maximum rotting rate $\varrho = o(1)$. We show that this learning problem has a worst-case regret lower bound of $\Omega(\max\{\varrho^{1/3} T, \sqrt{T}\})$, where $T$ is the time horizon. We show that a matching upper bound of $\tilde{O}(\max\{\varrho^{1/3} T, \sqrt{T}\})$, up to a poly-logarithmic factor, can be achieved by an algorithm that knows the maximum rotting rate $\varrho$, using a UCB index together with a threshold to decide whether to continue pulling an arm or remove it from further consideration. We also show that a $\tilde{O}(\max\{\varrho^{1/3} T, T^{3/4}\})$ regret upper bound can be achieved by an algorithm that does not know the value of $\varrho$, by using an adaptive UCB index along with an adaptive threshold value.
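The UCB-index-with-threshold idea can be sketched as follows. This is a simplified illustration, not the paper's algorithm: the confidence radius, the `threshold` value, and the fresh-arm sampling interface `draw_arm` are all illustrative assumptions.

```python
import math

def rotting_threshold_ucb(draw_arm, horizon, threshold, delta=0.05):
    """Sketch of a UCB-with-threshold rule for rotting infinitely
    many-armed bandits: keep pulling the current arm while its UCB
    index stays above `threshold`; otherwise discard the arm and
    sample a fresh one. `draw_arm()` returns a function
    arm(n) -> reward of that arm's n-th pull (rewards decay in n).
    """
    total = 0.0
    t = 0
    while t < horizon:
        arm = draw_arm()
        pulls, mean = 0, 0.0
        while t < horizon:
            r = arm(pulls)
            pulls += 1
            t += 1
            total += r
            mean += (r - mean) / pulls  # running empirical mean
            ucb = mean + math.sqrt(2 * math.log(1 / delta) / pulls)
            if ucb < threshold:
                break  # arm looks rotten or suboptimal: remove it
    return total
```

An arm whose reward stays high is never discarded, while an arm that has rotted to near zero is dropped once its confidence interval falls below the threshold.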
In this paper, we study a multi-class, multi-server queueing system with stochastic rewards of job-server assignments following a bilinear model in feature vectors representing jobs and servers. Our goal is regret minimization against an oracle policy that has complete information about system parameters. We propose a scheduling algorithm that uses a linear bandit algorithm along with dynamic allocation of jobs to servers. For a baseline setting in which mean job service times are identical for all jobs, we show that our algorithm has sublinear regret, as well as a sublinear bound on the mean queue length, in the horizon time. We further show that similar bounds hold under more general assumptions, allowing for non-identical mean job service times across job classes and a time-varying set of server classes. We also show that better regret and mean queue length bounds can be guaranteed by an algorithm with access to the traffic intensities of job classes. We present results of numerical experiments showing how the regret and mean queue length of our algorithms depend on various system parameters, and compare their performance against previously proposed algorithms, using both synthetic randomly generated data and a real-world cluster-computing data trace.
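The bilinear-reward assignment step can be illustrated with a LinUCB-style linear bandit: the feature of a (job, server) pair is the flattened outer product of their feature vectors, and a ridge estimate of the unknown parameter drives optimistic assignments. This is a minimal sketch under assumed interfaces (`jobs`, `servers`, and the `rewards` lookup are illustrative), omitting the queueing dynamics and dynamic allocation of the actual algorithm.

```python
import math

def linucb_assign(jobs, servers, rewards, alpha=1.0, lam=1.0):
    """Sketch of bilinear-reward job-server assignment via LinUCB.
    `rewards[(i, j)]` is the realized reward of assigning job i to
    server j. The inverse Gram matrix is maintained with the
    Sherman-Morrison rank-one update.
    """
    d = len(jobs[0]) * len(servers[0])
    # A = lam * I, tracked via its inverse
    Ainv = [[(1.0 / lam if r == c else 0.0) for c in range(d)] for r in range(d)]
    b = [0.0] * d
    assignment = []
    for i, job in enumerate(jobs):
        best, best_j = -float("inf"), 0
        theta = [sum(Ainv[r][c] * b[c] for c in range(d)) for r in range(d)]
        for j, srv in enumerate(servers):
            x = [u * v for u in job for v in srv]  # flattened outer product
            mean = sum(theta[r] * x[r] for r in range(d))
            Ax = [sum(Ainv[r][c] * x[c] for c in range(d)) for r in range(d)]
            width = alpha * math.sqrt(max(sum(x[r] * Ax[r] for r in range(d)), 0.0))
            if mean + width > best:  # optimism in the face of uncertainty
                best, best_j = mean + width, j
        x = [u * v for u in job for v in servers[best_j]]
        r = rewards[(i, best_j)]
        # Sherman-Morrison update of Ainv for A += x x^T
        Ax = [sum(Ainv[rr][c] * x[c] for c in range(d)) for rr in range(d)]
        denom = 1.0 + sum(x[rr] * Ax[rr] for rr in range(d))
        for rr in range(d):
            for cc in range(d):
                Ainv[rr][cc] -= Ax[rr] * Ax[cc] / denom
        for rr in range(d):
            b[rr] += r * x[rr]
        assignment.append(best_j)
    return assignment
```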
We study a single-server scheduling problem with the objective of minimizing the expected cumulative holding cost incurred by jobs, where the parameters defining stochastic job holding costs are unknown to the scheduler. We consider a general setting allowing for different job classes, where jobs of the same class have statistically identical holding costs and service times, with an arbitrary number of jobs across classes. In each time step, the server may serve a job and observes random holding costs of the jobs that are yet to be served. We consider a learning-based $c\mu$-rule scheduling policy that starts with a preemptive phase of fixed duration, serving as a learning phase during which data about jobs is collected, after which it switches to a non-preemptive schedule. Our algorithms are designed to handle instances with large or small gaps between mean job holding costs, and achieve near-optimal performance guarantees. The performance of algorithms is evaluated by regret, where the benchmark is the minimum possible holding cost achievable by the $c\mu$-rule scheduling policy when job parameters are known. We show lower bounds on regret and algorithms that achieve nearly matching regret upper bounds. Our numerical results demonstrate the efficacy of our algorithms and show that our regret analysis is nearly tight.
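For context, the classic $c\mu$ rule used as the benchmark above is simple to state: among waiting jobs, always serve the one with the largest product of holding-cost rate $c$ and service rate $\mu$. A minimal deterministic sketch (service times fixed at $1/\mu$, single batch of jobs, no learning):

```python
def c_mu_schedule(jobs):
    """Serve jobs in decreasing order of c * mu (the c-mu rule).

    `jobs` is a list of (c, mu) pairs: holding-cost rate and service
    rate. Returns the service order (job indices) and the total
    holding cost, where each job accrues cost c per unit time until
    its completion and service times are deterministic 1/mu.
    """
    order = sorted(range(len(jobs)), key=lambda i: -jobs[i][0] * jobs[i][1])
    t, cost = 0.0, 0.0
    for i in order:
        c, mu = jobs[i]
        t += 1.0 / mu   # job i completes at time t
        cost += c * t   # cost accrued by job i up to its completion
    return order, cost
```

For example, with two unit-rate jobs of holding-cost rates 1 and 2, serving the costlier job first yields total cost 2·1 + 1·2 = 4, versus 1·1 + 2·2 = 5 for the reverse order.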
Parallel implementations of stochastic gradient descent (SGD) have received significant research attention, thanks to its excellent scalability properties. A fundamental barrier when parallelizing SGD is the high bandwidth cost of communicating gradient updates between nodes; consequently, several lossy compression heuristics have been proposed, by which nodes only communicate quantized gradients. Although effective in practice, these heuristics do not always converge. In this paper, we propose Quantized SGD (QSGD), a family of compression schemes with convergence guarantees and good practical performance. QSGD allows the user to smoothly trade off communication bandwidth and convergence time: nodes can adjust the number of bits sent per iteration, at the cost of possibly higher variance. We show that this trade-off is inherent, in the sense that improving it past some threshold would violate information-theoretic lower bounds. QSGD guarantees convergence for convex and non-convex objectives, under asynchrony, and can be extended to stochastic variance-reduced techniques. When applied to training deep neural networks for image classification and automated speech recognition, QSGD leads to significant reductions in end-to-end training time. For instance, on 16 GPUs, we can train the ResNet-152 network to full accuracy on ImageNet 1.8× faster than the full-precision variant. Further, even computationally-heavy architectures such as Inception and ResNet can benefit from the reduction in communication: on 16 GPUs, QSGD reduces the end-to-end convergence time of ResNet-152 by approximately 2×. Networks trained with QSGD can converge to virtually the same accuracy as full-precision variants, and gradient quantization may even slightly improve accuracy in some settings. Related Work. One line of related research studies the communication complexity of convex optimization.
In particular, [40] studied two-processor convex minimization in the same model, provided a lower bound of Ω(n(log n + log(1/ε))) bits on the communication cost of n-dimensional convex problems, and proposed a non-stochastic algorithm for strongly convex problems, whose communication cost is within a log factor of the lower bound. By contrast, our focus is on stochastic gradient methods. Recent work [5] focused on lower bounds on the number of communication rounds necessary for convex learning. Buckwild! [10] was the first to consider the convergence guarantees of low-precision SGD. It gave upper bounds on the error probability of SGD, assuming unbiased stochastic quantization, convexity, and gradient sparsity, and showed significant speedup when solving convex problems on CPUs. QSGD refines these results by focusing on the trade-off between communication and convergence. We view quantization as an independent source of variance for SGD, which allows us to employ standard convergence results [7]. The main differences from Buckwild!
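The core of QSGD is unbiased stochastic quantization: each coordinate of a gradient is mapped to one of $s+1$ levels of its magnitude relative to the vector's Euclidean norm, rounding up or down randomly so that the dequantized vector equals the original in expectation. A minimal sketch of this scheme (the encoding into actual bit strings, e.g. via Elias coding, is omitted):

```python
import math
import random

def qsgd_quantize(v, s):
    """Stochastic quantization of gradient vector v to s+1 levels.

    Each coordinate's magnitude relative to ||v||_2 is rounded to an
    adjacent multiple of 1/s, up with probability equal to the
    fractional part, so the scheme is unbiased. Returns
    (norm, signs, levels); dequantized value is norm * sign * level / s.
    """
    norm = math.sqrt(sum(x * x for x in v))
    if norm == 0.0:
        return 0.0, [1] * len(v), [0] * len(v)
    signs, levels = [], []
    for x in v:
        r = abs(x) / norm * s            # position among the s+1 levels
        low = math.floor(r)
        p = r - low                      # round up with probability p
        level = low + (1 if random.random() < p else 0)
        signs.append(1 if x >= 0 else -1)
        levels.append(int(level))
    return norm, signs, levels

def qsgd_dequantize(norm, signs, levels, s):
    return [norm * sg * lv / s for sg, lv in zip(signs, levels)]
```

Averaging many independent quantizations of a fixed gradient recovers the gradient itself, which is exactly the unbiasedness property that lets quantization be treated as an extra source of variance in standard SGD analyses.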
Modeling lies at the core of both the financial and the insurance industry for a wide variety of tasks. The rise and development of machine learning and deep learning models have created many opportunities to improve our modeling toolbox. Breakthroughs in these fields often come with the requirement of large amounts of data. Such large datasets are often not publicly available in finance and insurance, mainly due to privacy and ethics concerns. This lack of data is currently one of the main hurdles in developing better models. One possible option for alleviating this issue is generative modeling. Generative models are capable of simulating fake but realistic-looking data, also referred to as synthetic data, that can be shared more freely. Generative Adversarial Networks (GANs) are one such class of models, increasing our capacity to fit very high-dimensional distributions of data. While research on GANs is an active topic in fields like computer vision, they have found limited adoption within the human sciences, like economics and insurance. The reason for this is that in these fields, most questions are inherently about identification of causal effects, while to this day neural networks, which are at the center of the GAN framework, focus mostly on high-dimensional correlations. In this paper we study the causal preservation capabilities of GANs and whether the produced synthetic data can reliably be used to answer causal questions. This is done by performing causal analyses on the synthetic data, produced by a GAN, with increasingly more lenient assumptions. We consider the cross-sectional case, the time series case and the case with a complete structural model. It is shown that in the simple cross-sectional scenario where correlation equals causation the GAN preserves causality, but that challenges arise for more advanced analyses.
In addition to being a widely recognised novelist, Milan Kundera has also authored three pieces for theatre: The Owners of the Keys (Majitel\'e kl\'i\v{c}\r{u}, 1961), The Blunder (Pt\'akovina, 1967), and Jacques and his Master (Jakub a jeho p\'an, 1971). In recent years, however, the hypothesis has been raised that Kundera is the true author of a fourth play: Juro J\'ano\v{s}\'ik, first performed in a 1974 production under the name of Karel Steigerwald, who was Kundera's student at the time. In this study, we make use of supervised machine learning to settle the question of authorship attribution in the case of Juro J\'ano\v{s}\'ik, with results strongly supporting the hypothesis of Kundera's authorship.
Cross entropy loss has served as the main objective function for classification-based tasks. Widely deployed for learning neural network classifiers, it shows both effectiveness and a probabilistic interpretation. Recently, after the success of self-supervised contrastive representation learning methods, supervised contrastive methods have been proposed to learn representations and have shown superior and more robust performance, compared to solely training with cross entropy loss. However, cross entropy loss is still needed to train the final classification layer. In this work, we investigate the possibility of learning both the representation and the classifier using one objective function that combines the robustness of contrastive learning and the probabilistic interpretation of cross entropy loss. First, we revisit a previously proposed contrastive-based objective function that approximates cross entropy loss and present a simple extension to learn the classifier jointly. Second, we propose a new version of supervised contrastive training that learns jointly the parameters of the classifier and the backbone of the network. We empirically show that our proposed objective functions yield a significant improvement over the standard cross entropy loss, with more training stability and robustness in various challenging settings.
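To make the two ingredients concrete, here is a minimal pure-Python sketch of a joint objective combining softmax cross entropy on classifier logits with a SupCon-style supervised contrastive term on L2-normalized embeddings. This is a generic illustration, not the paper's specific objective; the weighting `alpha` and temperature `tau` are illustrative assumptions.

```python
import math

def softmax_ce(logits, label):
    """Numerically stable softmax cross entropy for one example."""
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    return -math.log(exps[label] / sum(exps))

def sup_con(embeddings, labels, tau=0.1):
    """SupCon-style loss on L2-normalized embeddings: pull
    same-label pairs together, push different-label pairs apart."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n, loss = len(embeddings), 0.0
    for i in range(n):
        pos = [j for j in range(n) if j != i and labels[j] == labels[i]]
        if not pos:
            continue
        denom = sum(math.exp(dot(embeddings[i], embeddings[j]) / tau)
                    for j in range(n) if j != i)
        for j in pos:
            num = math.exp(dot(embeddings[i], embeddings[j]) / tau)
            loss += -math.log(num / denom) / len(pos)
    return loss / n

def joint_loss(logits, embeddings, labels, alpha=0.5):
    """Convex combination of cross entropy and the contrastive term."""
    ce = sum(softmax_ce(l, y) for l, y in zip(logits, labels)) / len(labels)
    return alpha * ce + (1 - alpha) * sup_con(embeddings, labels)
```

A batch whose embeddings cluster by label gets a lower contrastive loss than the same embeddings with mismatched labels, which is the geometric pressure the joint objective adds on top of cross entropy.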
Industrial Internet of Things (IoT) systems increasingly rely on wireless communication standards. In a common industrial scenario, indoor wireless IoT devices communicate with access points to deliver data collected from industrial sensors, robots and factory machines. Due to static or quasi-static locations of IoT devices and access points, historical observations of IoT device channel conditions provide a possibility to precisely identify the device without observing its traditional identifiers (e.g., MAC or IP address). Such device identification methods based on wireless fingerprinting have gained increased attention lately as an additional cyber-security mechanism for critical IoT infrastructures. In this paper, we perform a systematic study of a large class of machine learning algorithms for device identification using wireless fingerprints for the most popular cellular and Wi-Fi IoT technologies. We design, implement, and deploy the system, collect relevant data sets, and train and test a multitude of machine learning algorithms, as part of the complete end-to-end solution design for device identification via wireless fingerprinting. The proposed solution is currently being deployed in a real-world industrial IoT environment as part of the H2020 project COLLABS.
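A simple baseline from the class of algorithms studied is nearest-neighbour classification over historical channel-condition features. The sketch below is a generic illustration with assumed inputs (the feature choice, e.g. per-subcarrier channel magnitudes, and the value of `k` are hypothetical), not the deployed solution.

```python
import math
from collections import Counter

def knn_identify(fingerprints, labels, query, k=3):
    """Identify a device by a k-nearest-neighbour vote over stored
    wireless fingerprints (feature vectors of past channel
    observations). Returns the majority label among the k closest."""
    dists = sorted(
        (math.dist(f, query), lbl) for f, lbl in zip(fingerprints, labels)
    )
    votes = Counter(lbl for _, lbl in dists[:k])
    return votes.most_common(1)[0][0]
```

Because device and access-point locations are quasi-static, a fresh channel observation tends to fall near that device's historical fingerprints, which is what makes even this simple distance-based vote informative.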
We describe the winning submission to the CRAC 2022 Shared Task on Multilingual Coreference Resolution. Our system first solves mention detection and then coreference linking on the retrieved spans with an antecedent-maximization approach, and both tasks are fine-tuned jointly with shared Transformer weights. We report results of fine-tuning a range of pretrained models. Central to this contribution are the fine-tuned multilingual models: we find that a large multilingual model with a sufficiently large encoder improves performance across all datasets, not only for underrepresented languages or groups of typologically related languages. The source code is available at https://github.com/ufal/crac2022-corpipe.
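The antecedent-maximization linking step can be sketched in isolation: each mention selects its highest-scoring preceding mention as antecedent, or starts a new entity when a dummy "no antecedent" option wins. This is a generic illustration of the decoding scheme, not the system's implementation; the scoring interface is an assumption.

```python
def link_antecedents(mentions, score):
    """Greedy antecedent-maximization coreference decoding.

    `score(i, j)` scores mention j (j < i) as antecedent of mention
    i; `score(i, i)` plays the role of the dummy 'no antecedent'
    option. Returns, for each mention, the index of the first
    mention of its entity (its cluster id).
    """
    cluster = list(range(len(mentions)))  # each mention starts its own entity
    for i in range(len(mentions)):
        best = max(range(i + 1), key=lambda j: score(i, j))
        if best != i:                     # link to the chosen antecedent
            cluster[i] = cluster[best]
    return cluster
```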